French Prosecutors Raid X Offices and Summon Musk as U.K. Launches New Probe Into Grok
French prosecutors carried out a search of the offices of Elon Musk's social media platform X on Tuesday morning and summoned the billionaire owner to attend a hearing in April. Conducted by the cybercrime unit of the Paris prosecutor's office, along with the French national cyber unit and the European Union police agency Europol, the search marks an escalation of the ongoing investigation into X over suspected abuse of algorithms, as well as allegations related to deepfake images and wider concerns over posts generated by the platform's AI chatbot, Grok. The office said the search was carried out with "the objective of ultimately ensuring the compliance of the X platform with French law," with a particular focus on X's Grok, designed by xAI, which chief prosecutor Laure Beccuau says has led "to the dissemination of Holocaust denial content and sexually explicit deepfakes." Europol spokesperson Jan Op Gen Oorth told the Associated Press that the police agency "is supporting the French authorities in this." Musk and former X CEO Linda Yaccarino have both been summoned for "voluntary interviews" with French prosecutors on April 20.
- Europe > France (1.00)
- Europe > United Kingdom (0.52)
- North America > United States (0.30)
- Africa (0.05)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government > France Government (0.98)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.59)
- Information Technology > Communications > Social Media (0.36)
U.K. Cracks Down on AI 'Nudify' Tech, Announces Investigation Into X
In this photo illustration, a screen displays examples of AI prompt-created videos, made with xAI's Grok app, on January 12, 2026 in London, England. The United Kingdom plans to bring into force a law that criminalizes the creation of non-consensual sexualized images, including through Grok, the chatbot within Elon Musk's X application, following the app's deepfake scandal of the last few weeks. "This means individuals are committing a criminal offence if they create--or seek to create--such content--including on X--and anyone who does this should expect to face the full extent of the law," Technology Secretary Liz Kendall announced in the House of Commons Monday, adding that the government would also work to make it illegal for companies to supply the tools designed to create these non-consensual images. The move came just hours after the Office of Communications (Ofcom)--the country's independent regulator for the communications industry--announced that it will be investigating X and the thousands of pornographic images generated by Grok that flooded the app, including sexualized images of what appear to be minors.
- Europe > United Kingdom > England > Greater London > London (0.46)
- North America > United States (0.30)
- Europe > France (0.06)
- (5 more...)
- Law (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (1.00)
Grok's deepfake crisis, explained
Welcome back to In the Loop, a new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? In the past few weeks, many tech leaders have made bold predictions about what AI will achieve in 2026, from mastering the field of biology to surpassing human intelligence outright. But in 2026's first week, the most visible use of AI has been X users employing Grok to digitally disrobe women. Elon Musk's platform X has been flooded with nonconsensual AI-created images, requested by users, of unclothed or scantily clad women, men and children, sometimes in sexual positions.
- North America > United States (0.15)
- Europe > France (0.06)
- Asia > Malaysia (0.05)
- (2 more...)
- Information Technology > Security & Privacy (0.46)
- Media > News (0.32)
How AI Is Being Used to Spread Misinformation--and Counter It--During the L.A. Protests
Here's how AI has been used during the L.A. protests. Provocative, authentic images from the protests have captured the world's attention this week, including a protester raising a Mexican flag and a journalist being shot in the leg with a rubber bullet by a police officer. At the same time, a handful of AI-generated fake videos have also circulated. Over the past couple of years, tools for creating these videos have rapidly improved, allowing users to generate convincing deepfakes within minutes. Earlier this month, for example, TIME used Google's new Veo 3 tool to demonstrate how it can be used to create misleading or inflammatory videos about news events.
- North America > United States > California > Los Angeles County > Los Angeles (0.45)
- Europe > France (0.09)
- Media > News (1.00)
- Government (1.00)
The rise of AI: When will Congress regulate it?
Fox News chief political anchor Bret Baier has the latest on the pros and cons of the bombshell developments on 'Special Report.' It is said that predicting the future isn't magic. If that's the case, perhaps we should ask AI when Congress might pass a bill to regulate the emerging technology before it spirals out of control. There's a push by Congressional leaders to approve a bill regulating AI when lawmakers return to Washington after the election. But the path to passage, and developing a consensus on establishing guardrails for AI, is far from certain.
- Europe (0.17)
- North America > United States > New York (0.05)
- North America > United States > District of Columbia > Washington (0.05)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (0.98)
- Media > News (0.72)
- Information Technology > Security & Privacy (0.71)
Deception in democracy: Beware the most common types of election-related scams
Pennsylvania Secretary of State Al Schmidt said mail-in ballots cannot be counted until 7 a.m. on Election Day under state law preventing pre-canvassing, so voters should not expect the final results to be available on election night. Elections are one of the most crucial parts of any democracy, and unfortunately that also means bad actors try to twist things for their own gain. With the U.S. general elections just around the corner, cybersecurity risks are ramping up, not just to the systems running the election but also to you. Social media and the internet are being used to spread propaganda and sway your opinions. What's even more concerning is that these campaigns are now powered by AI tools, making it very easy for bad actors to churn out misleading information at lightning speed and on a huge scale.
- North America > United States > Pennsylvania (0.25)
- North America > United States > West Virginia (0.05)
- North America > United States > New Jersey > Essex County > Newark (0.05)
Tech expert warns 2024 will see 'explosion of AI-powered cybercrime' - and 27 US government agencies are currently using these systems in place of humans
A tech expert has warned that new advances in AI-powered technology will lead to an 'explosion' in cybercrime in 2024. Shawn Henry, the chief security officer for CrowdStrike, recently shared how cybercriminals can use AI to sneak through individuals' cybersecurity defenses, spread misinformation, or infiltrate corporate networks. Cybercriminals can use AI to mislead people into believing false narratives during the election season and potentially giving up sensitive information, said the retired executive assistant director of the Federal Bureau of Investigation (FBI). The cybersecurity veteran's warning comes as AI has been given more jobs than ever, including in US federal and state governments. Twenty-seven departments of the US federal government have deployed AI in some way, and many states have, too.
- North America > United States > Utah (0.07)
- North America > United States > Texas (0.07)
- North America > United States > Ohio (0.07)
Artificial Intelligence software used to spread misinformation in Venezuela
Artificial Intelligence software is being used to spread misinformation in Venezuela. Stefano Pozzebon takes a look at how it's being distributed, and how to spot fake video.
- South America > Venezuela (0.74)
- North America > United States > Maryland (0.29)
- Europe > Finland (0.09)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
The Rise of AI in Cyber Attacks: Risks and Challenges
Artificial intelligence (AI) has the potential to revolutionize many aspects of our lives, including the way we conduct cyber attacks. While the use of AI for cyber attacks is still in its early stages, it is likely that it will become more prevalent in the near future. There are a number of ways in which AI could be used to enhance the capabilities of hackers and other malicious actors. For example, AI could be used to automate and speed up the process of identifying and exploiting vulnerabilities in computer systems. This could make it easier for hackers to gain access to sensitive information or disrupt critical systems.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.87)
OpenAI releases AI tool that can produce an image from text
OpenAI researchers have created a new system that can produce a full image, including one of an astronaut riding a horse, from a simple plain-English sentence. Known as DALL·E 2, the second generation of the text-to-image AI is able to create realistic images and artwork at a higher resolution than its predecessor. The artificial intelligence research group won't be releasing the system to the public. The new version is able to create images from simple text, add objects into existing images, or even provide different points of view on an existing image. Developers imposed restrictions on the scope of the AI to ensure it could not produce hateful, racist or violent images, or be used to spread misinformation.